

Where are the Whales: A Human-in-the-loop Detection Method for Identifying Whales in High-resolution Satellite Imagery

Robinson, Caleb, Goetz, Kimberly T., Khan, Christin B., Sackett, Meredith, Leonard, Kathleen, Dodhia, Rahul, Ferres, Juan M. Lavista

arXiv.org Artificial Intelligence

Effective monitoring of whale populations is critical for conservation, but traditional survey methods are expensive and difficult to scale. While prior work has shown that whales can be identified in very high-resolution (VHR) satellite imagery, large-scale automated detection remains challenging due to a lack of annotated imagery, variability in image quality and environmental conditions, and the cost of building robust machine learning pipelines over massive remote sensing archives. We present a semi-automated approach for surfacing possible whale detections in VHR imagery using a statistical anomaly detection method that flags spatial outliers, i.e. "interesting points". We pair this detector with a web-based labeling interface designed to enable experts to quickly annotate the interesting points. We evaluate our system on three benchmark scenes with known whale annotations and achieve recalls of 90.3% to 96.4%, while reducing the area requiring expert inspection by up to 99.8% -- from over 1,000 sq km to less than 2 sq km in some cases. Our method does not rely on labeled training data and offers a scalable first step toward future machine-assisted marine mammal monitoring from space. We have open sourced this pipeline at https://github.com/microsoft/whales.
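The paper's detector flags spatial outliers ("interesting points") without labeled training data. A minimal sketch of one way such a statistical spatial-outlier detector could work, not the authors' actual pipeline: score each pixel by its z-score against a local neighborhood and flag strong deviations (the function name, window size, and threshold are all assumptions):

```python
import numpy as np
from scipy import ndimage

def interesting_points(image, window=15, z_thresh=4.0):
    """Flag pixels whose brightness deviates strongly from their local neighborhood."""
    img = image.astype(float)
    # Local mean and variance via uniform (box) filters
    local_mean = ndimage.uniform_filter(img, size=window)
    local_sq_mean = ndimage.uniform_filter(img ** 2, size=window)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 1e-12))
    # Standardized deviation from the neighborhood
    z = (img - local_mean) / local_std
    # (row, col) coordinates of candidate points for expert review
    return np.argwhere(np.abs(z) > z_thresh)
```

Only the flagged coordinates would then be surfaced in the labeling interface, which is what makes the reduction from over 1,000 sq km to a few sq km of expert inspection possible.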


Fine-Scale Soil Mapping in Alaska with Multimodal Machine Learning

Lin, Yijun, Chen, Theresa, Brungard, Colby, Grunwald, Sabine, Ives, Sue, Macander, Matt, Nawrocki, Timm, Chiang, Yao-Yi, Jelinski, Nic

arXiv.org Artificial Intelligence

Fine-scale soil mapping in Alaska, traditionally relying on fieldwork and localized simulations, remains a critical yet underdeveloped task, despite the region's ecological importance and extensive permafrost coverage. As permafrost thaw accelerates due to climate change, it threatens infrastructure stability and key ecosystem services, such as soil carbon storage. High-resolution soil maps are essential for characterizing permafrost distribution, identifying vulnerable areas, and informing adaptation strategies. We present MISO, a vision-based machine learning (ML) model to produce statewide fine-scale soil maps for near-surface permafrost and soil taxonomy. The model integrates a geospatial foundation model for visual feature extraction, implicit neural representations for continuous spatial prediction, and contrastive learning for multimodal alignment and geo-location awareness. We compare MISO with Random Forest (RF), a traditional ML model that has been widely used in soil mapping applications. Spatial cross-validation and regional analysis across Permafrost Zones and Major Land Resource Areas (MLRAs) show that MISO generalizes better to remote, unseen locations and achieves higher recall than RF, which is critical for monitoring permafrost thaw and related environmental processes. These findings demonstrate the potential of advanced ML approaches for fine-scale soil mapping and provide practical guidance for future soil sampling and infrastructure planning in permafrost-affected landscapes. The project will be released at https://github.com/knowledge-computing/Peatland-permafrost.
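The abstract evaluates MISO and RF with spatial cross-validation, where held-out folds are geographically disjoint rather than randomly sampled. A minimal sketch of that idea (not the paper's code; the grid-cell blocking, cell size, and function name are assumptions): assign samples to folds by spatial block, so nearby points never straddle train and test sets.

```python
def spatial_folds(coords, cell_deg=2.0, n_folds=5):
    """Assign each (lat, lon) sample to a fold by its spatial grid cell,
    so all samples in one cell land in the same fold."""
    folds = []
    for lat, lon in coords:
        # Floor-divide coordinates into a coarse grid cell
        cell = (int(lat // cell_deg), int(lon // cell_deg))
        # Deterministically map the cell to one of n_folds folds
        folds.append(hash(cell) % n_folds)
    return folds
```

Compared with random K-fold splits, this penalizes models that merely interpolate between nearby training points, which is why it better reflects generalization to remote, unseen locations.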


WikiVideo: Article Generation from Multiple Videos

Martin, Alexander, Kriz, Reno, Walden, William Gantt, Sanders, Kate, Recknor, Hannah, Yang, Eugene, Ferraro, Francis, Van Durme, Benjamin

arXiv.org Artificial Intelligence

We present the challenging task of automatically creating a high-level Wikipedia-style article that aggregates information from multiple diverse videos about real-world events, such as natural disasters or political elections. Videos are intuitive sources for retrieval-augmented generation (RAG), but most contemporary RAG workflows focus heavily on text and existing methods for video-based summarization focus on low-level scene understanding rather than high-level event semantics. To close this gap, we introduce WikiVideo, a benchmark consisting of expert-written articles and densely annotated videos that provide evidence for articles' claims, facilitating the integration of video into RAG pipelines and enabling the creation of in-depth content that is grounded in multimodal sources. We further propose Collaborative Article Generation (CAG), a novel interactive method for article creation from multiple videos. CAG leverages an iterative interaction between an r1-style reasoning model and a VideoLLM to draw higher level inferences about the target event than is possible with VideoLLMs alone, which fixate on low-level visual features. We benchmark state-of-the-art VideoLLMs and CAG in both oracle retrieval and RAG settings and find that CAG consistently outperforms alternative methods, while suggesting intriguing avenues for future work.


Advancing Large Language Models for Spatiotemporal and Semantic Association Mining of Similar Environmental Events

Tian, Yuanyuan, Li, Wenwen, Hu, Lei, Chen, Xiao, Brook, Michael, Brubaker, Michael, Zhang, Fan, Liljedahl, Anna K.

arXiv.org Artificial Intelligence

Retrieval and recommendation are two essential tasks in modern search tools. This paper introduces a novel retrieval-reranking framework leveraging Large Language Models (LLMs) to enhance the spatiotemporal and semantic association mining and recommendation of relevant unusual climate and environmental events described in news articles and web posts. This framework uses advanced natural language processing techniques to address the limitations of traditional manual curation methods in terms of high labor cost and lack of scalability. Specifically, we explore an optimized solution that employs cutting-edge embedding models for semantically analyzing spatiotemporal events (news) and propose a Geo-Time Re-ranking (GT-R) strategy that integrates multi-faceted criteria including spatial proximity, temporal association, semantic similarity, and category-instructed similarity to rank and identify similar spatiotemporal events. We apply the proposed framework to a dataset of four thousand Local Environmental Observer (LEO) Network events, outperforming multiple cutting-edge dense retrieval models at recommending similar events. The search and recommendation pipeline can be applied to a wide range of similar data search tasks dealing with geospatial and temporal data. We hope that by linking relevant events, we can better aid the general public to gain an enhanced understanding of climate change and its impact on different communities.
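A GT-R-style score combines spatial proximity, temporal association, and semantic similarity into a single re-ranking value. A minimal sketch of how such a blended score could look, not the paper's actual formula: the weights, the haversine distance, and the exponential decay scales are all assumptions for illustration.

```python
import math
from datetime import date

def gt_r_score(query, candidate, w_space=0.25, w_time=0.25, w_sem=0.5):
    """Blend spatial, temporal, and semantic affinity into one score in [0, 1]."""
    # Spatial proximity: haversine distance, decayed into (0, 1]
    lat1, lon1 = map(math.radians, query["loc"])
    lat2, lon2 = map(math.radians, candidate["loc"])
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    dist_km = 2 * 6371 * math.asin(math.sqrt(a))
    s_space = math.exp(-dist_km / 500)          # 500 km decay scale (assumed)
    # Temporal association: exponential decay over the day gap
    gap_days = abs((query["date"] - candidate["date"]).days)
    s_time = math.exp(-gap_days / 365)          # one-year decay scale (assumed)
    # Semantic similarity: cosine between embedding vectors
    dot = sum(x * y for x, y in zip(query["emb"], candidate["emb"]))
    norm = (math.sqrt(sum(x * x for x in query["emb"]))
            * math.sqrt(sum(y * y for y in candidate["emb"])))
    s_sem = dot / norm
    return w_space * s_space + w_time * s_time + w_sem * s_sem
```

Candidates returned by a first-stage dense retriever would then be re-sorted by this score, so events that are close in space and time rise above ones that are only textually similar.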


Transformer for Object Re-Identification: A Survey

Ye, Mang, Chen, Shuoyi, Li, Chenyue, Zheng, Wei-Shi, Crandall, David, Du, Bo

arXiv.org Artificial Intelligence

Object Re-Identification (Re-ID) aims to identify and retrieve specific objects from varying viewpoints. For a prolonged period, this field was predominantly driven by deep convolutional neural networks. In recent years, the Transformer has seen remarkable advancements in computer vision, prompting a growing body of research into its application to Re-ID. This paper provides a comprehensive review and in-depth analysis of Transformer-based Re-ID. Categorizing existing works into Image/Video-Based Re-ID, Re-ID with limited data/annotations, Cross-Modal Re-ID, and Special Re-ID Scenarios, we thoroughly elucidate the advantages demonstrated by the Transformer in addressing a multitude of challenges across these domains. Considering the trending unsupervised Re-ID, we propose a new Transformer baseline, UntransReID, achieving state-of-the-art performance on both single- and cross-modal tasks. This survey also covers a wide range of Re-ID research objects, including progress in animal Re-ID. Given the diversity of species in animal Re-ID, we devise a standardized experimental benchmark and conduct extensive experiments to explore the applicability of Transformers to this task and to facilitate future research. Finally, we discuss some important yet under-investigated open issues in the big foundation model era; we believe this survey will serve as a new handbook for researchers in this field.


Web scraping and text analysis in R and GGplot2 – A.Z. Andis Arietta

#artificialintelligence

I recently needed to learn text mining for a project at work. I generally learn more quickly with a real-world project. So, I turned to a topic I love: Wilderness, to see how I could apply the skills of text scrubbing and natural language processing. You can clone my Git repo for the project or follow along in the post below. The first portion of this post will cover web scraping, then text mining, and finally analysis and visualization.


AI Being Tapped to Understand What Whales Say to Each Other - AI Trends

#artificialintelligence

AI is being applied to whale research, especially to understand what whales are trying to communicate in the audible sounds they make to each other in the ocean. For example, marine biologist Shane Gero has worked to match clicks coming from whales around the Caribbean island nation of Dominica, to behavior he hopes will reveal the meanings of the sounds they make. Gero is a behavioral ecologist affiliated with the Marine Bioacoustics Lab at Aarhus University in Denmark, and the Department of Biology of Dalhousie University of Halifax, Nova Scotia. Gero works with a team from Project CETI, a nonprofit that aims to apply advanced machine learning and state-of-the-art robotics to listen to and translate the communication of whales. Project CETI has recently announced a five-year effort to build on Gero's work with a research project to try to decipher what sperm whales are saying to each other, according to a recent account in National Geographic.


Uncommon machine learning examples that challenge what you know - Dataconomy

#artificialintelligence

Machine learning (ML) is how a system learns and adapts its processes from the patterns found in large amounts of data. When we think of machine learning, some prominent examples come to mind: for instance, the way product recommendations on Amazon are eerily similar to Google searches you've done. The scope of machine learning extends far beyond what we know of and see in our daily lives. Since machine learning is a relatively new field, the limits of its application are constantly being pushed outward.


Not So Common Machine Learning Examples That Challenge Your Knowledge

#artificialintelligence

Machine Learning refers to the process through which a computer learns and changes its operations based on patterns identified in vast quantities of data. When we think about machine learning, we think of a few well-known instances. For example, the way Amazon recommends products is remarkably similar to Google searches you've done. Machine learning's reach is far broader than what we are familiar with and observe in our daily lives. Because machine learning is such a young science, the boundaries of its applicability are continuously being pushed outward. Virtual personal assistants were once the stuff of fantasies, but now they can be found in every other home.


Researchers in Norway test using underwater robots with fin-like flaps to guard fish farms

Daily Mail - Science & tech

Researchers in Norway are testing how salmon in a commercial fish farm might react to being regularly monitored by underwater robots. While fish farms are typically uneventful environments, they still require oversight to ensure the captive fish are safe and healthy, a task most commercial fish farms assign to a human diver. Maarja Kruusmaa and a team of researchers at the Norwegian University of Science and Technology wanted to test how fish would respond to being watched over by robots instead of people. 'The happier the fish are, the healthier the fish are, the better they eat, the better they grow, the less parasites they have and the less they get sick,' Kruusmaa told New Scientist. The team used two different underwater robots to test whether the fish would react differently based on the size and propulsion method.